Comparative performance analysis of some accelerated and hybrid accelerated gradient models
Authors
Abstract
Similar resources
Heuristically-Accelerated Reinforcement Learning: A Comparative Analysis of Performance
This paper presents a comparative analysis of three Reinforcement Learning algorithms (Q-learning, Q(λ)-learning and QS-learning) and their heuristically-accelerated variants (HAQL, HAQ(λ) and HAQS), in which heuristics bias action selection and thus speed up learning. The experiments were performed in a simulated robot soccer environment which reproduces the conditions of a real competition lea...
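The speed-up mechanism described here can be pictured as adding a weighted heuristic term to the Q-values at action-selection time. Below is a minimal Python sketch of such a heuristically-biased epsilon-greedy rule; the names (Q, H, xi) and the tabular setting are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def heuristic_action(Q, H, state, xi=1.0, epsilon=0.1,
                         rng=np.random.default_rng(0)):
        """Epsilon-greedy selection biased by a heuristic table H.
        Q and H are (n_states, n_actions) arrays; xi weights the bias.
        All names here are illustrative, not from the paper."""
        if rng.random() < epsilon:
            return int(rng.integers(Q.shape[1]))      # explore uniformly
        # exploit: Q-values plus the heuristic bias steer the choice
        return int(np.argmax(Q[state] + xi * H[state]))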
Accelerated Gradient Boosting
Gradient tree boosting is a prediction algorithm that sequentially produces a model in the form of linear combinations of decision trees, by solving an infinite-dimensional optimization problem. We combine gradient boosting and Nesterov’s accelerated descent to design a new algorithm, which we call AGB (for Accelerated Gradient Boosting). Substantial numerical evidence is provided on both synth...
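The combination this abstract describes amounts to fitting each new tree to the residuals at a Nesterov momentum point rather than at the current model. A minimal Python sketch for squared loss follows, using the standard Nesterov lambda/gamma recursion; the shrinkage value, tree depth, and exact constants are assumptions, not necessarily the paper's.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def agb_fit(X, y, n_rounds=100, shrinkage=0.1, max_depth=2):
        """Toy accelerated gradient boosting for squared loss.
        Returns the fitted trees and the in-sample predictions."""
        f = np.full(len(y), y.mean())   # main model sequence
        g = f.copy()                    # momentum (look-ahead) sequence
        lam, trees = 1.0, []
        for _ in range(n_rounds):
            # weak learner fit to residuals at the momentum point g
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y - g)
            trees.append(tree)
            f_new = g + shrinkage * tree.predict(X)
            # standard Nesterov momentum coefficients
            lam_new = (1.0 + np.sqrt(1.0 + 4.0 * lam ** 2)) / 2.0
            gamma = (1.0 - lam) / lam_new
            g = (1.0 - gamma) * f_new + gamma * f
            f, lam = f_new, lam_new
        return trees, f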
Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method
We provide a novel accelerated first-order method that achieves the asymptotically optimal convergence rate for smooth functions in the first-order oracle model. To this day, Nesterov's Accelerated Gradient Descent (AGD) and variations thereof were the only methods achieving acceleration in this standard blackbox model. In contrast, our algorithm is significantly different from a...
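For context, the AGD baseline this abstract refers to is the classic two-sequence Nesterov scheme. A minimal Python sketch for minimizing an L-smooth convex function is given below; this is the textbook baseline, not the paper's extra-gradient method, and grad, L, and the iteration count are placeholders.

    import numpy as np

    def nesterov_agd(grad, x0, L, n_iters=1000):
        """Nesterov's Accelerated Gradient Descent (the AGD baseline).
        grad: callable returning the gradient; L: smoothness constant."""
        x = y = np.asarray(x0, dtype=float)
        lam = 1.0
        for _ in range(n_iters):
            x_new = y - grad(y) / L                 # gradient step at y
            lam_new = (1.0 + np.sqrt(1.0 + 4.0 * lam ** 2)) / 2.0
            y = x_new + ((lam - 1.0) / lam_new) * (x_new - x)  # momentum
            x, lam = x_new, lam_new
        return x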
Survival analysis of thalassemia major patients using Cox, Gompertz proportional hazard and Weibull accelerated failure time models
Background: Thalassemia major (TM) is a severe disease and the most common anemia worldwide. The survival time of the disease and its risk factors are of importance for physicians. The present study was conducted to apply the semi-parametric Cox PH model and use parametric proportional hazards (PH) and accelerated failure time (AFT) models to identify the risk factors related to survival of TM ...
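Both model families named in this abstract are available in standard survival-analysis libraries. A minimal Python sketch using the lifelines package follows; the dataframe and its column names (time, event, age) are hypothetical stand-ins for the study's real covariates.

    import pandas as pd
    from lifelines import CoxPHFitter, WeibullAFTFitter

    # hypothetical data: survival time, event indicator, one covariate
    df = pd.DataFrame({
        "time":  [5.0, 12.0, 3.5, 8.0, 20.0, 15.0],
        "event": [1, 0, 1, 1, 0, 1],
        "age":   [4.0, 7.0, 2.0, 9.0, 5.0, 6.0],
    })

    # semi-parametric Cox proportional hazards model
    cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    cox.print_summary()

    # parametric Weibull accelerated failure time (AFT) model
    aft = WeibullAFTFitter().fit(df, duration_col="time", event_col="event")
    aft.print_summary()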
Asynchronous Accelerated Stochastic Gradient Descent
Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data...
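Of the acceleration techniques listed, Nesterov's method is the easiest to show in isolation. A minimal Python sketch of a Nesterov-accelerated SGD loop follows; grad_fn, the hyperparameters, and the single-worker setting are illustrative assumptions (the asynchronous, multi-worker aspect of the paper is omitted).

    import numpy as np

    def accelerated_sgd(grad_fn, w0, data, lr=0.01, momentum=0.9,
                        epochs=5, rng=np.random.default_rng(0)):
        """SGD with Nesterov momentum; grad_fn(w, sample) returns a
        stochastic gradient at parameters w for one sample."""
        w = np.asarray(w0, dtype=float)
        v = np.zeros_like(w)
        for _ in range(epochs):
            for i in rng.permutation(len(data)):
                g = grad_fn(w + momentum * v, data[i])  # look-ahead grad
                v = momentum * v - lr * g
                w = w + v
        return w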
Journal
Journal title: The University Thought - Publication in Natural Sciences
Year: 2019
ISSN: 1450-7226, 2560-3094
DOI: 10.5937/univtho9-18174